Inside the SOC | Ransomware
September 6, 2021
What Are the Early Signs of a Ransomware Attack?


The deployment of ransomware is the endgame of a cyber-attack. A threat actor must have accomplished several previous steps – including lateral movement and privilege escalation – to reach this final position. The ability to detect and counter the early moves is therefore just as important as detecting the encryption itself.

Attackers are using diverse strategies – such as ‘Living off the Land’ and carefully crafting their command and control (C2) – to blend in with normal network traffic and evade traditional security defenses. The analysis below examines the Tactics, Techniques and Procedures (TTPs) used by many ransomware actors by unpacking a compromise which occurred at a defense contractor in Canada.

Phases of a ransomware attack

Figure 1: Timeline of the attack.

The opening: Initial access to privileged account

The first indicator of compromise was a login on a server with an unusual credential, followed by unusual admin activity. The attacker may have gained access to the username and password in a number of ways, from credential stuffing to buying them on the Dark Web. As the attacker had privileged access from the get-go, there was no need for privilege escalation.

Lateral movement

Two days later, the attacker began to spread from the initial server. The compromised server began to send out unusual Windows Management Instrumentation (WMI) commands.

It began remotely controlling four other devices – authenticating on them with a single admin credential. One of the destinations was a domain controller (DC), another was a backup server.

By using WMI – a common admin tool – for lateral movement, the attacker opted to ‘live off the land’ rather than introduce a new lateral movement tool, aiming to remain unnoticed by the company’s security stack. The unusual use of WMI was picked up by Darktrace and the timings of the unusual WMI connections were pieced together by Cyber AI Analyst.
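Darktrace's models learn what normal WMI usage looks like per device rather than applying static rules, but the underlying idea can be illustrated with a simple sketch. The example below assumes a CSV export of DCE-RPC connection records (WMI rides on DCE-RPC over the network); the column names, interface names, and threshold are illustrative assumptions, not any product's real schema.

```python
# Minimal sketch: flag devices that make WMI-style DCE-RPC connections to an
# unusually high number of previously unseen internal hosts. Assumes a CSV
# export with columns ts, src, dst, endpoint (names are illustrative).
import csv
from collections import defaultdict

WMI_INTERFACES = {"IWbemServices", "IWbemLevel1Login"}  # common WMI RPC interfaces
NEW_DST_THRESHOLD = 3  # tune to the environment

def flag_unusual_wmi(log_path, baseline):
    """baseline: dict mapping src -> set of destinations seen historically."""
    new_dsts = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["endpoint"] in WMI_INTERFACES:
                if row["dst"] not in baseline.get(row["src"], set()):
                    new_dsts[row["src"]].add(row["dst"])
    return {src: dsts for src, dsts in new_dsts.items()
            if len(dsts) >= NEW_DST_THRESHOLD}

if __name__ == "__main__":
    baseline = {"10.0.0.5": {"10.0.0.9"}}  # toy historical baseline
    for src, dsts in flag_unusual_wmi("dce_rpc.csv", baseline).items():
        print(f"{src} initiated WMI to {len(dsts)} previously unseen hosts: {sorted(dsts)}")
```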

Models:

  • New or Uncommon WMI Activity
  • AI Analyst / Extensive Chain of Administrative Connections

Establish C2

The four devices then connected to the IP 185.250.151[.]172. Three of them, including the DC and backup server, established SSL beacons to the IP using the dynamic DNS domain goog1e.ezua[.]com.

The C2 endpoints had very little open-source intelligence (OSINT) available, but it seems that a Cobalt Strike-style script had used the endpoint in the past. This points to relatively sophisticated tooling: the attacker combined SSL over a dynamic DNS domain with a spoofed Google lookalike to mask its beaconing.

Interestingly, through the entirety of the attack, only these three devices used SSL connections for beaconing, while later C2 occurred over unencrypted protocols. It appears these three critical devices were treated differently to the other infected devices on the network.
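To give a flavour of why this combination stands out, the sketch below checks a TLS server name for two of the traits seen here: a dynamic DNS parent domain and a first label that closely resembles a well-known brand. The provider and brand lists, and the similarity threshold, are illustrative assumptions rather than a complete detection.

```python
# Minimal sketch: flag TLS server names that sit under dynamic DNS providers or
# closely resemble well-known brands (e.g. "goog1e" vs "google"). The provider
# and brand lists here are illustrative, not exhaustive.
from difflib import SequenceMatcher

DYNDNS_SUFFIXES = (".ezua.com", ".duckdns.org", ".no-ip.org")   # example providers
BRANDS = ("google", "microsoft", "amazon")

def looks_spoofed(label, brands=BRANDS, threshold=0.8):
    return any(label != b and SequenceMatcher(None, label, b).ratio() >= threshold
               for b in brands)

def assess_sni(server_name):
    reasons = []
    if server_name.endswith(DYNDNS_SUFFIXES):
        reasons.append("dynamic DNS domain")
    first_label = server_name.split(".")[0]
    if looks_spoofed(first_label):
        reasons.append("lookalike brand label")
    return reasons

for sni in ["goog1e.ezua.com", "mail.google.com"]:
    print(sni, "->", assess_sni(sni) or "no flags")
```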

Models:

  • An immediate breach of Anomalous Server Activity / External Activity from Critical Network Device, followed by several model breaches involving beaconing and SSL to dynamic DNS (Domain Controller DynDNS SSL or HTTP was particularly specific to this activity)

The middle game: Internal reconnaissance and further lateral movement

The attack chain took the form of two cycles of lateral movement, followed by establishing C2 at the newly controlled destinations.

Figure 2: Observed chain of lateral movement and C2.

After establishing C2, the DC made WMI requests to 20 further IPs over an extended period. It also scanned 234 IPs with ICMP pings, presumably searching for additional hosts.
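The scanning itself is simple to reason about: one source suddenly pinging hundreds of distinct internal addresses. A minimal sliding-window sketch of that logic is shown below; the window and threshold values are illustrative, and a production system would baseline each device individually.

```python
# Minimal sketch: flag hosts that ping an unusually large number of distinct
# internal addresses in a short window. Assumes ICMP echo-request records as
# (timestamp, src, dst) tuples sorted by time; field layout is illustrative.
from collections import defaultdict

WINDOW_SECONDS = 600
SCAN_THRESHOLD = 50  # distinct destinations within the window

def detect_icmp_scans(events):
    """events: iterable of (ts, src, dst), ts in seconds, sorted by ts."""
    per_src = defaultdict(list)  # src -> list of (ts, dst)
    alerts = []
    for ts, src, dst in events:
        bucket = per_src[src]
        bucket.append((ts, dst))
        # drop events that have fallen outside the sliding window
        while bucket and bucket[0][0] < ts - WINDOW_SECONDS:
            bucket.pop(0)
        distinct = {d for _, d in bucket}
        if len(distinct) >= SCAN_THRESHOLD:
            alerts.append((src, ts, len(distinct)))
            bucket.clear()  # avoid re-alerting on the same burst
    return alerts

events = [(i, "10.0.0.10", f"10.0.1.{i}") for i in range(60)]
print(detect_icmp_scans(events))  # -> [('10.0.0.10', 49, 50)]
```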

Ransom notes were later found on many of these devices, particularly the hypervisors. The ransomware was most likely deployed via remote WMI commands.

Models:

  • AI Analyst / Suspicious Chain of Administrative Connections (from the initial server to the DC to the hypervisor)
  • AI Analyst / Extensive Suspicious WMI Activity (from the DC)
  • Device / ICMP Address Scan, together with the AI Analyst Scanning of Multiple Devices incident (from the DC)

Further C2

As the second wave of lateral movement ended, unencrypted C2 began from five new devices. Each started with GET requests to the IP already seen in the SSL C2 (185.250.151[.]172), this time using the spoofed hostname google[.]com.

Activity on each device started with HTTP requests for a URI ending in .png, before settling into more consistent beaconing to the URI /books/. Eventually, the devices made POST requests to the URI /ebooks/?k=, with the k parameter carrying a unique identifier for each device. All of this appears to be a way of concealing a C2 beacon in what looks like plausible traffic to Google.

In this way, by encrypting some C2 connections with SSL to a Dynamic DNS domain, while crafting other unencrypted HTTP to look like traffic to google[.]com, the attacker managed to operate undetected by the company’s antivirus tools.
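One property that still gives this traffic away, even when the Host header is spoofed, is its regularity. The sketch below scores how machine-like the request timing to a given URI is, using the coefficient of variation of the gaps between requests; the minimum request count and the scoring are illustrative assumptions.

```python
# Minimal sketch: score the regularity of HTTP requests from one device to one
# URI. Tight, low-variance intervals are characteristic of beaconing. Input is
# a sorted list of request timestamps (seconds); thresholds are illustrative.
import statistics

def beaconing_score(timestamps, min_requests=10):
    if len(timestamps) < min_requests:
        return 0.0
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = statistics.mean(gaps)
    if mean == 0:
        return 0.0
    # coefficient of variation: near 0 means very regular check-ins
    cv = statistics.pstdev(gaps) / mean
    return max(0.0, 1.0 - cv)

regular = [i * 60 for i in range(20)]      # a check-in every 60 seconds
print(round(beaconing_score(regular), 2))  # -> 1.0, highly regular
```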

Darktrace identified this anomalous activity and generated a large number of external connectivity model breaches.

Models:

  • Eight breaches of Compromise / HTTP Beaconing to New Endpoint from the affected devices

Accomplish mission: Checkmate

Finally, the attacker deployed ransomware. In the ransom note, they stated that sensitive information had been exfiltrated and would be leaked if the company did not pay.

However, this was a lie. Darktrace confirmed that no data had been exfiltrated, as the C2 communications had sent far too little data. Lying about data exfiltration in order to extort a ransom is a common tactic for attackers, and visibility is crucial to determine whether a threat actor is bluffing.
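Checking whether an exfiltration claim is plausible comes down to visibility of outbound volumes. As a rough illustration, the sketch below totals outbound bytes per external endpoint from flow records; the field names are assumptions, and a real analysis would also account for C2 protocol overhead.

```python
# Minimal sketch: sum outbound bytes per external endpoint to judge whether a
# claimed data theft is plausible. A C2 channel that only ever sent a few
# kilobytes cannot have carried gigabytes of stolen files. Record fields are
# illustrative.
from collections import defaultdict

def outbound_totals(flows):
    """flows: iterable of dicts with 'dst' and 'bytes_out' keys."""
    totals = defaultdict(int)
    for flow in flows:
        totals[flow["dst"]] += flow["bytes_out"]
    return dict(totals)

flows = [
    {"dst": "185.250.151.172", "bytes_out": 4_096},
    {"dst": "185.250.151.172", "bytes_out": 2_048},
]
for dst, sent in outbound_totals(flows).items():
    print(f"{dst}: {sent / 1024:.1f} KiB sent")  # far too little for bulk exfiltration
```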

In addition, Antigena – Darktrace’s Autonomous Response technology – blocked an internal download from one of the servers compromised in the first round of lateral movement, because it was an unusual incoming data volume for the client device. This was most likely the attacker attempting to transfer data in preparation for the end goal, so the block may have prevented this data from being moved for exfiltration.

Figure 3: Antigena model breach.

Figure 4: Device is blocked from SMB communication with the compromised server three seconds later.
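The logic behind the block is a per-device comparison against the volumes that device normally receives. A simplified z-score version is sketched below; Darktrace's actual models are probabilistic and multi-dimensional, so treat this only as an illustration with made-up baseline figures.

```python
# Minimal sketch: compare a client's incoming data volume against its own
# learned baseline, the kind of deviation that triggered the block above.
# The baseline statistics and threshold are illustrative.
import statistics

def unusual_incoming_volume(history_bytes, observed_bytes, z_threshold=4.0):
    """history_bytes: per-interval incoming byte counts previously seen for this device."""
    mean = statistics.mean(history_bytes)
    stdev = statistics.pstdev(history_bytes) or 1.0   # avoid division by zero
    z = (observed_bytes - mean) / stdev
    return z >= z_threshold, round(z, 1)

history = [2_000_000, 3_500_000, 1_800_000, 2_700_000]   # roughly 2-3.5 MB per interval
print(unusual_incoming_volume(history, 900_000_000))     # ~900 MB -> (True, very high z-score)
```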

Models:

  • Unusual Incoming Data Volume
  • High Volume Server Data Transfer

Unfortunately, Antigena was not active on the majority of the devices involved in the incident. Had it been in active mode, Antigena would have stopped the early stages of this activity, including the unusual administrative logins and the beaconing. The customer is now working to fully configure Antigena so that they benefit from 24/7 Autonomous Response.

Cyber AI Analyst investigates

Darktrace’s AI spotted and reported on beaconing from several devices including the DC, which was the highest scoring device for unusual behavior at the time of the activity. It condensed this information into three incidents – ‘Possible SSL Command and Control’, ‘Extensive Suspicious Remote WMI Activity’, and ‘Scanning of Remote Devices’.

Crucially, Cyber AI Analyst not only summarized the admin activity from the DC but also linked it back to the first device through an unusual chain of administrative connections.

Figure 5: Cyber AI Analyst incident showing a suspicious chain of administrative connections, linking the first device in the chain, via the compromised DC, to a hypervisor where a ransom note was found – saving valuable time in the investigation. It also highlights the credential common to all of the lateral movement connections.

Finding lateral movement chains manually is a laborious process well suited to AI. In this case, it enabled the security team to quickly trace back to the device which was the likely source of the attack and find the common credential in the connections.
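Conceptually, the chain-tracing task is a graph walk over administrative connections, collecting the credentials used along the way. The sketch below reconstructs such chains from illustrative connection records; the record format and field names are assumptions, not AI Analyst's internal representation.

```python
# Minimal sketch: reconstruct chains of administrative connections from a
# starting device and surface the credentials used along each chain.
from collections import defaultdict

def build_chains(events, start, max_depth=5):
    """events: list of dicts with 'src', 'dst', 'credential' keys."""
    graph = defaultdict(list)
    for e in events:
        graph[e["src"]].append(e)
    chains = []

    def walk(node, path, creds):
        if len(path) > max_depth:
            return
        extended = False
        for e in graph.get(node, []):
            # avoid revisiting devices already in this chain
            if e["dst"] not in [p["dst"] for p in path] and e["dst"] != start:
                walk(e["dst"], path + [e], creds | {e["credential"]})
                extended = True
        if not extended and path:
            chains.append((path, creds))

    walk(start, [], set())
    return chains

events = [
    {"src": "serverA", "dst": "dc01", "credential": "svc_admin"},
    {"src": "dc01", "dst": "hypervisor01", "credential": "svc_admin"},
]
for path, creds in build_chains(events, "serverA"):
    hops = " -> ".join(["serverA"] + [e["dst"] for e in path])
    print(f"{hops} (credentials: {', '.join(sorted(creds))})")
```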

Play the game like a machine

To get the full picture of a ransomware attack, it is important to look beyond the final encryption to previous phases of the kill chain. In the attack above, the encryption itself did not generate network traffic, so detecting the intrusion at its early stages was vital.

Despite the attacker ‘Living off the Land’ and using WMI with a compromised admin credential, as well as spoofing the common hostname google[.]com for C2 and applying dynamic DNS for SSL connections, Darktrace was able to identify all the stages of the attack and immediately piece them together into a meaningful security narrative. This would have been almost impossible for a human analyst to achieve without labor-intensive checking of the timings of individual connections.

With ransomware infections becoming faster and more frequent, with the threat of offensive AI looming closer and the Dark Web marketplace thriving, with security teams drowning under false positives and no time left on the clock, AI is now an essential part of any security solution. The board is set, the clock is ticking, the stakes are higher than ever. Your move.

Thanks to Darktrace analyst Daniel Gentle for his insights on the above threat find.

IoCs:

  • 185.250.151[.]172 – IP address used for both HTTP and SSL C2
  • goog1e.ezua[.]com – Dynamic DNS hostname used for SSL C2

Darktrace model detections:

  • AI Analyst models:
      • Extensive Suspicious WMI Activity
      • Suspicious Chain of Administrative Connections
      • Scanning of Multiple Devices
      • Possible SSL Command and Control
  • Meta model:
      • Device / Large Number of Model Breaches
  • External connectivity models:
      • Anomalous Server Activity / Domain Controller DynDNS SSL or HTTP
      • Compromise / Suspicious TLS Beaconing to Rare External
      • Compromise / Beaconing Activity To External Rare
      • Compromise / SSL to DynDNS
      • Anomalous Server Activity / External Activity from Critical Network Device
      • Compromise / Sustained SSL or HTTP Increase
      • Compromise / Suspicious Beaconing Behaviour
      • Compromise / HTTP Beaconing to New Endpoint
  • Internal activity models:
      • Device / New or Uncommon WMI Activity
      • User / New Admin Credentials on Client
      • Device / ICMP Address Scan
      • Anomalous Connection / Unusual Incoming Data Volume
      • Unusual Activity / High Volume Server Data Transfer

Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Author
Brianna Leddy
Director of Analysis

Based in San Francisco, Brianna is Director of Analysis at Darktrace. She joined the analyst team in 2016 and has since advised a wide range of enterprise customers on advanced threat hunting and leveraging Self-Learning AI for detection and response. Brianna works closely with the Darktrace SOC team to proactively alert customers to emerging threats and investigate unusual behavior in enterprise environments. Brianna holds a Bachelor’s degree in Chemical Engineering from Carnegie Mellon University.


Inside the SOC
September 26, 2024

Thread Hijacking: How Attackers Exploit Trusted Conversations to Infiltrate Networks

What is Thread Hijacking?

Cyberattacks are becoming increasingly stealthy and targeted, with malicious actors focusing on high-value individuals to gain privileged access to their organizations’ digital environments. One technique that has gained prominence in recent years is thread hijacking. This method allows attackers to infiltrate ongoing conversations, exploiting the trust within these threads to access sensitive systems.

Thread hijacking typically involves attackers gaining access to a user’s email account, monitoring ongoing conversations, and then inserting themselves into these threads. By replying to existing emails, they can send malicious links, request sensitive information, or manipulate the conversation to achieve their goals, such as redirecting payments or stealing credentials. Because such emails appear to come from a trusted source, they often bypass human security teams and traditional security filters.

How does thread hijacking work?

  1. Initial Compromise: Attackers first gain access to a user’s email account, often through phishing, malware, or exploiting weak passwords.
  2. Monitoring: Once inside, they monitor the user’s email threads, looking for ongoing conversations that can be exploited.
  3. Infiltration: The attacker then inserts themselves into these conversations, often replying to existing emails. Because the email appears to come from a trusted source within an ongoing thread, it bypasses many traditional security filters and raises less suspicion.
  4. Exploitation: Using the trust established in the conversation, attackers can send malicious links, request sensitive information, or manipulate the conversation to achieve their goals, such as redirecting payments or stealing credentials.

A recent incident involving a Darktrace customer saw a malicious actor attempt to manipulate trusted email communications, potentially exposing critical data. The attacker created a new mailbox rule to forward specific emails to an archive folder, making it harder for the customer to notice the malicious activity. This highlights the need for advanced detection and robust preventive tools.

Darktrace’s Self-Learning AI is able to recognize subtle deviations in normal behavior, whether in a device or a Software-as-a-Service (SaaS) user. This capability enables it to detect emerging attacks in their early stages. In this post, we’ll delve into the attacker’s tactics and illustrate how Darktrace / IDENTITY™ successfully identified and mitigated a thread hijacking attempt, preventing escalation and potential disruption to the customer’s network.

Thread hijacking attack overview & Darktrace coverage

On August 8, 2024, Darktrace detected an unusual email received by a SaaS account on a customer’s network. The email appeared to be a reply to a previous chain discussing tax and payment details, likely related to a transaction between the customer and one of their business partners.

Figure 1: Headers of the suspicious email received.

A few hours later, Darktrace detected the same SaaS account creating a new mailbox rule named “.”, a tactic commonly used by malicious actors to evade detection when setting up new email rules [2]. This rule was designed to forward all emails containing a specific word to the user’s “Archives” folder. This evasion technique is typically used to move any malicious emails or responses to a rarely opened folder, ensuring that the genuine account holder does not see replies to phishing emails or other malicious messages sent by attackers [3].

Darktrace recognized the newly created email rule as suspicious after identifying the following parameters:

  • AlwaysDeleteOutlookRulesBlob: False
  • Force: False
  • MoveToFolder: Archive
  • Name: “.”
  • FromAddressContainsWords: [Redacted]
  • MarkAsRead: True
  • StopProcessingRules: True
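For illustration, the sketch below scores a mailbox rule for the evasive traits listed above: a punctuation-only name, auto-archiving, mark-as-read, and stop-processing. The weights and folder list are illustrative assumptions, not Darktrace's detection logic.

```python
# Minimal sketch: score a new mailbox rule for evasive traits like the ones
# listed above. The rule dictionary mirrors the observed parameters, but the
# scoring weights and folder list are illustrative.
SUSPICIOUS_FOLDERS = {"archive", "rss feeds", "conversation history", "deleted items"}

def score_inbox_rule(rule):
    score, reasons = 0, []
    name = rule.get("Name", "")
    if len(name.strip(". ")) == 0 or len(name) <= 2:
        score += 2; reasons.append("minimal or punctuation-only rule name")
    if rule.get("MoveToFolder", "").lower() in SUSPICIOUS_FOLDERS:
        score += 2; reasons.append("auto-moves mail to a rarely viewed folder")
    if rule.get("MarkAsRead"):
        score += 1; reasons.append("marks matching mail as read")
    if rule.get("StopProcessingRules"):
        score += 1; reasons.append("stops further rule processing")
    return score, reasons

rule = {"Name": ".", "MoveToFolder": "Archive", "MarkAsRead": True,
        "StopProcessingRules": True, "FromAddressContainsWords": ["<redacted>"]}
print(score_inbox_rule(rule))  # high score, with all four evasive traits flagged
```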

Darktrace also noted that the user attempting to create this new email rule had logged into the SaaS environment from an unusual IP address. Although the IP was located in the same country as the customer and the ASN used by the malicious actor was typical for the customer’s network, the rare IP, coupled with the anomalous behavior, raised suspicions.
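The reasoning here can be pictured as a simple rarity score over the user's login history: a familiar country and ASN keep the score low, but a never-before-seen IP still adds weight, which is why this login stood out when combined with the new mailbox rule. The sketch below uses illustrative weights and history fields.

```python
# Minimal sketch: weigh how unusual a login source is given a user's history.
# A matching country and ASN keep the score low, but a never-before-seen IP
# still contributes. History fields and weights are illustrative.
def login_rarity(history, ip, asn, country):
    """history: dict with sets under 'ips', 'asns', 'countries'."""
    score = 0
    if ip not in history["ips"]:
        score += 2          # a new source IP always adds some weight
    if asn not in history["asns"]:
        score += 3
    if country not in history["countries"]:
        score += 5
    return score

history = {"ips": {"203.0.113.7"}, "asns": {"AS64500"}, "countries": {"NL"}}
# Attacker logs in from a new IP, but within a familiar ASN and country:
print(login_rarity(history, "198.51.100.23", "AS64500", "NL"))  # -> 2, low but nonzero
```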

Figure 2: Hijacked SaaS account creating the new mailbox rule.

Given the suspicious nature of this activity, Darktrace’s Security Operations Centre (SOC) investigated and alerted the customer’s security team to the incident.

Due to a public holiday in the customer's location (likely an intentional choice by the threat actor), their security team did not immediately notice or respond to the notification. Fortunately, the customer had Darktrace's Autonomous Response capability enabled, which allowed it to take action against the suspicious SaaS activity without human intervention.

In this instance, Darktrace swiftly disabled the seemingly compromised SaaS user for 24 hours. This action halted the spread of the compromise to other accounts on the customer’s SaaS platform and prevented any sensitive data exfiltration. Additionally, it provided the security team with ample time to investigate the threat and remove the user from their environment. The customer also received detailed incident reports and support through Darktrace’s Security Operations Support service, enabling direct communication with Darktrace’s expert Analyst team.

Conclusion

Ultimately, Darktrace’s anomaly-based detection allowed it to identify the subtle deviations from the user’s expected behavior, indicating a potential compromise on the customer’s SaaS platform. In this case, Darktrace detected a login to a SaaS platform from an unusual IP address, despite the attacker’s efforts to conceal their activity by using a known ASN and logging in from the expected country.

Despite the attempted SaaS hijack occurring on a public holiday when the customer’s security team was likely off-duty, Darktrace autonomously detected the suspicious login and the creation of a new email rule. It swiftly blocked the compromised SaaS account, preventing further malicious activity and safeguarding the organization from data exfiltration or escalation of the compromise.

This highlights the growing need for AI-driven security that can respond to malicious activity in the absence of human security teams and detect the subtle behavioral changes that traditional security tools often miss.

Credit to Ryan Traill, Threat Content Lead, for his contribution to this blog.

Appendices

Darktrace Model Detections

SaaS / Compliance / Anomalous New Email Rule

Experimental / Antigena Enhanced Monitoring from SaaS Client Block

Antigena / SaaS / Antigena Suspicious SaaS Activity Block

Antigena / SaaS / Antigena Email Rule Block

References

[1] https://blog.knowbe4.com/whats-the-best-name-threadjacking-or-man-in-the-inbox-attacks

[2] https://darktrace.com/blog/detecting-attacks-across-email-saas-and-network-environments-with-darktraces-combined-ai-approach

[3] https://learn.microsoft.com/en-us/defender-xdr/alert-grading-playbook-inbox-manipulation-rules

About the author
Maria Geronikolou
Cyber Analyst

September 26, 2024

How AI can help CISOs navigate the global cyber talent shortage

The global picture

4 million cybersecurity professionals are needed worldwide to protect and defend the digital world – twice the number currently in the workforce [1].

Innovative technologies are transforming business operations, enabling access to new markets, personalized customer experiences, and increased efficiency. However, this digital transformation also challenges Security Operations Centers (SOCs) with managing and protecting a complex digital environment without additional resources or advanced skills.

At the same time, the cybersecurity industry is suffering a severe global skills shortage, leaving many SOCs understaffed and under-skilled. With a 72% increase in data breaches from 2021 to 2023 [2], SOCs are dealing with overwhelming alert volumes from diverse security tools. Nearly 60% of cybersecurity professionals report burnout [3], leading to high turnover rates. Consequently, only a fraction of alerts are thoroughly investigated, increasing the risk of undetected breaches. More than half of organizations that experienced breaches in 2024 admitted to having short-staffed SOCs [4].

How AI can help organizations do more with less

Cyber defense needs to evolve at the same pace as cyber-attacks, but the global skills shortage is making that difficult. As threat actors increasingly abuse AI for malicious purposes, using defensive AI to enable innovation and optimization at scale is reshaping how organizations approach cybersecurity.

The value of AI isn’t in replacing humans, but in augmenting their efforts and enabling them to scale their defense capabilities and their value to the organization. With AI, cybersecurity professionals can operate at digital speed, analyzing vast data sets, identifying more vulnerabilities with higher accuracy, responding and triaging faster, reducing risks, and implementing proactive measures—all without additional staff.

Research indicates that organizations leveraging AI and automation extensively in security functions – such as prevention, detection, investigation, or response – reduced their average mean time to identify (MTTI) and mean time to contain (MTTC) data breaches by 33% and 43%, respectively. These organizations also managed to contain breaches nearly 100 days faster on average compared to those not using AI and automation [5].

First, you've got to apply the right AI to the right security challenge. We dig into how different AI technologies can bridge specific skills gaps in the CISO’s Guide to Navigating the Cybersecurity Skills Shortage.

Cases in point: AI as a human force multiplier

Let’s take a look at just some of the cybersecurity challenges to which AI can be applied to scale defense efforts and relieve the burden on the SOC. We go further into real-life examples in our white paper.

Automated threat detection and response

AI enables 24/7 autonomous response, eliminating the need for after-hours SOC shifts and providing security leaders with peace of mind. AI can scale response efforts by analyzing vast amounts of data in real time, identifying anomalies, and initiating precise autonomous actions to contain incidents, which buys teams time for investigation and remediation.  

Triage and investigation

AI enhances the triage process by automatically categorizing and prioritizing security alerts, allowing cybersecurity professionals to focus on the most critical threats. It creates a comprehensive picture of an attack, helps identify its root cause, and generates detailed reports with key findings and recommended actions.  

Automation also significantly reduces overwhelming alert volumes and high false positive rates, enabling analysts to concentrate on high-priority threats and engage in more proactive and strategic initiatives.

Eliminating silos and improving visibility across the enterprise

Security and IT teams are overwhelmed by the technological complexity of operating multiple tools, resulting in manual work and excessive alerts. AI can correlate threats across the entire organization, enhancing visibility and eliminating silos, thereby saving resources and reducing complexity.

With 88% of organizations favoring a platform approach over standalone solutions, many are consolidating their tech stacks in this direction. This consolidation provides native visibility across clouds, devices, communications, locations, applications, people, and third-party security tools and intelligence.

Upskilling your existing talent in AI

As revealed in the State of AI Cybersecurity Survey 2024, only 26% of cybersecurity professionals say they have a full understanding of the different types of AI in use within security products [6].

Understanding AI can upskill your existing staff, enhancing their expertise and optimizing business outcomes. Human expertise is crucial for the effective and ethical integration of AI. To enable true AI-human collaboration, cybersecurity professionals need specific training on using, understanding, and managing AI systems. To make this easier, the Darktrace ActiveAI Security Platform is designed to enable collaboration and reduce the learning curve – lowering the barrier to entry for junior or less skilled analysts.  

However, to bridge the immediate expertise gap in managing AI tools, organizations can consider expert managed services that take the day-to-day management out of the SOC’s hands, allowing them to focus on training and proactive initiatives.

Conclusion

Experts predict the cybersecurity skills gap will continue to grow, increasing operational and financial risks for organizations. AI for cybersecurity is crucial for CISOs to augment their teams and scale defense capabilities with speed, scalability, and predictive insights, while human expertise remains vital for providing the intuition and problem-solving needed for responsible and efficient AI integration.

If you’re thinking about implementing AI to solve your own cyber skills gap, consider the following:

  • Select an AI cybersecurity solution tailored to your specific business needs
  • Review and streamline existing workflows and tools – consider a platform-based approach to eliminate inefficiencies
  • Make use of managed services to outsource AI expertise
  • Upskill and reskill existing talent through training and education
  • Foster a knowledge-sharing culture with access to knowledge bases and collaboration tools

Interested in how AI could augment your SOC to increase efficiency and save resources? Read our longer CISO’s Guide to Navigating the Cybersecurity Skills Shortage.

And to better understand cybersecurity practitioners' attitudes towards AI, check out Darktrace’s State of AI Cybersecurity 2024 report.

References

  1. https://www.isc2.org/research  
  2. https://www.forbes.com/advisor/education/it-and-tech/cybersecurity-statistics/  
  3. https://www.informationweek.com/cyber-resilience/the-psychology-of-cybersecurity-burnout  
  4. https://www.ibm.com/downloads/cas/1KZ3XE9D  
  5. https://www.ibm.com/downloads/cas/1KZ3XE9D  
  6. https://darktrace.com/resources/state-of-ai-cyber-security-2024
About the author
The Darktrace Community